1.
Healthc Technol Lett ; 11(2-3): 33-39, 2024.
Article in English | MEDLINE | ID: mdl-38638494

ABSTRACT

The integration of augmented reality (AR) into daily surgical practice is hindered by the challenge of correctly registering pre-operative data. This includes intelligent 3D model superposition whilst simultaneously handling real and virtual occlusions caused by the AR overlay. Occlusions can negatively impact surgical safety and as such deteriorate rather than improve surgical care. Robotic surgery is particularly suited to tackling these integration challenges in a stepwise approach, as the robotic console allows different inputs to be displayed to the surgeon in parallel. Nevertheless, real-time de-occlusion requires extensive computational resources, which further complicates clinical integration. This work tackles the problem of instrument occlusion and presents, to the authors' best knowledge, the first-in-human on-edge deployment of a real-time binary segmentation pipeline during three robot-assisted surgeries: partial nephrectomy, migrated endovascular stent removal, and liver metastasectomy. To this end, a state-of-the-art real-time segmentation and 3D model pipeline was implemented and presented to the surgeon during live surgery. The pipeline allows real-time binary segmentation of 37 non-organic surgical items, which are then never occluded by the AR overlay. The application also features real-time manual 3D model manipulation for correct soft-tissue alignment. The proposed pipeline can contribute to surgical safety, ergonomics, and acceptance of AR in minimally invasive surgery.
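As a minimal illustrative sketch of the de-occlusion idea described in this entry (not the authors' published pipeline), the snippet below composites a virtual-model overlay onto an endoscopic frame only where no instrument is present, given a binary mask from any real-time segmentation model; the mask source and blending factor are assumptions.

```python
# Illustrative sketch only -- not the published implementation.
# Blends an AR overlay onto an endoscopic frame while keeping instruments
# visible, given a binary instrument mask from any segmentation model.
import numpy as np

def composite_ar_frame(frame: np.ndarray,
                       overlay: np.ndarray,
                       instrument_mask: np.ndarray,
                       alpha: float = 0.6) -> np.ndarray:
    """frame, overlay: HxWx3 uint8 images; instrument_mask: HxW bool array,
    True where a non-organic item (instrument) is present."""
    blended = (alpha * overlay + (1.0 - alpha) * frame).astype(np.uint8)
    out = frame.copy()
    keep_overlay = ~instrument_mask            # overlay only over tissue
    out[keep_overlay] = blended[keep_overlay]  # instruments stay on top
    return out
```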

2.
Ann Surg ; 2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38390732

ABSTRACT

OBJECTIVE: To develop a pioneering surgical anonymization algorithm for reliable and accurate real-time removal of out-of-body images, validated across various robotic platforms. BACKGROUND: The use of surgical video data has become common practice for enhancing research and training. Video sharing requires complete anonymization, which, in the case of endoscopic surgery, entails the removal of all non-surgical video frames in which the endoscope can record the patient or operating room staff. To date, no openly available algorithmic solution offers reliable real-time anonymization for video streaming that is also robotic-platform- and procedure-independent. METHODS: A dataset of 63 surgical videos of six procedures performed on four robotic systems was annotated for out-of-body sequences. The resulting 496,828 images were used to develop a deep learning algorithm that automatically detects out-of-body frames. Our solution was subsequently benchmarked against existing anonymization methods. In addition, we offer a post-processing step to enhance performance and test a low-cost setup for real-time anonymization during live surgery streaming. RESULTS: Framewise anonymization yielded an ROC AUC score of 99.46% on unseen procedures, increasing to 99.89% after post-processing. Our Robotic Anonymization Network (ROBAN) outperforms previous state-of-the-art algorithms, even on unseen procedure types, despite the fact that alternative solutions are explicitly trained on those procedures. CONCLUSIONS: Our deep learning model ROBAN offers reliable, accurate, and safe real-time anonymization during complex and lengthy surgical procedures, regardless of the robotic platform. The model can be used in real time for surgical live streaming and is openly available.
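For illustration only, here is a sketch of framewise out-of-body scoring with a simple temporal smoothing post-processing step and ROC AUC evaluation; the smoothing window, threshold, and frame-blanking strategy are assumptions and do not reflect the published ROBAN design.

```python
# Hedged sketch: framewise out-of-body scoring, temporal post-processing,
# and ROC AUC evaluation. Parameters are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

def smooth_scores(scores: np.ndarray, window: int = 15) -> np.ndarray:
    """Moving-average smoothing over per-frame out-of-body probabilities."""
    kernel = np.ones(window) / window
    return np.convolve(scores, kernel, mode="same")

def evaluate(labels: np.ndarray, scores: np.ndarray):
    """Return framewise and post-processed ROC AUC."""
    return roc_auc_score(labels, scores), roc_auc_score(labels, smooth_scores(scores))

def anonymize(frames, scores, threshold: float = 0.5):
    """Blank any frame whose smoothed out-of-body probability exceeds threshold."""
    for frame, p in zip(frames, smooth_scores(np.asarray(scores))):
        yield np.zeros_like(frame) if p > threshold else frame
```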

3.
Diagnostics (Basel) ; 13(21)2023 Nov 05.
Article in English | MEDLINE | ID: mdl-37958283

ABSTRACT

(1) Background: Surgical phases form the basic building blocks for surgical skill assessment, feedback, and teaching. The phase durations themselves and their correlation with clinical parameters at diagnosis have not yet been investigated. Novel commercial platforms provide phase indications but have not yet been assessed for accuracy. (2) Methods: We assessed 100 robot-assisted partial nephrectomy videos for phase durations based on previously defined proficiency metrics. We developed an annotation framework and subsequently compared our annotations to an existing commercial solution (Touch Surgery, Medtronic™). We then explored clinical correlations between phase durations and parameters derived from diagnosis and treatment. (3) Results: An objective and uniform phase assessment requires precise definitions derived from an iterative revision process. A comparison to a commercial solution shows large differences in definitions across phases. BMI and the duration of renal tumor identification are positively correlated, as are tumor complexity and both tumor excision and renorrhaphy duration. (4) Conclusions: Surgical phase duration can be correlated with certain clinical outcomes. Further research should investigate whether the retrieved correlations are also clinically meaningful. This requires larger datasets, facilitated by intelligent computer vision algorithms. Commercial platforms can support this dataset expansion and help unlock its full potential, provided that the phase annotation details are disclosed.
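A small sketch of the kind of correlation analysis this entry describes, assuming a per-case table of phase durations; the column names (`bmi`, `tumor_identification_s`) and file name are hypothetical and used only for demonstration.

```python
# Illustrative sketch: Pearson correlation between one clinical parameter
# and one annotated phase duration. Column names are assumptions.
import pandas as pd
from scipy.stats import pearsonr

def phase_correlation(df: pd.DataFrame, parameter: str, phase: str):
    """Correlate a clinical parameter with a phase duration (in seconds)."""
    data = df[[parameter, phase]].dropna()
    r, p = pearsonr(data[parameter], data[phase])
    return r, p

# Hypothetical usage:
# df = pd.read_csv("phase_durations.csv")  # one row per RAPN case
# r, p = phase_correlation(df, "bmi", "tumor_identification_s")
```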

4.
Eur Urol ; 84(1): 86-91, 2023 07.
Article in English | MEDLINE | ID: mdl-36941148

ABSTRACT

Several barriers prevent the integration and adoption of augmented reality (AR) in robotic renal surgery despite the increased availability of virtual three-dimensional (3D) models. Apart from correct model alignment and deformation, not all instruments are clearly visible in AR. Superimposition of a 3D model on top of the surgical stream, including the instruments, can result in a potentially hazardous surgical situation. We demonstrate real-time instrument detection during AR-guided robot-assisted partial nephrectomy and show the generalization of our algorithm to AR-guided robot-assisted kidney transplantation. We developed a deep learning algorithm to detect all non-organic items, trained on 65,927 manually labeled instruments across 15,100 frames. Our setup, which runs on a standalone laptop, was deployed in three different hospitals and used by four different surgeons. Instrument detection is a simple and feasible way to enhance the safety of AR-guided surgery. Future investigations should strive to optimize video processing to minimize the 0.5-s delay currently experienced. General AR applications also need further optimization, including detection and tracking of organ deformation, for full clinical implementation.


Subject(s)
Augmented Reality, Deep Learning, Robotic Surgical Procedures, Robotics, Computer-Assisted Surgery, Humans, Robotic Surgical Procedures/methods, Computer-Assisted Surgery/methods, Three-Dimensional Imaging/methods
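As a hedged illustration of how the instrument detections described in entry 4 could be rendered in the AR view (not the deployed algorithm), the sketch below draws bounding boxes from any detector; the `(x1, y1, x2, y2, score)` detection format and the score threshold are assumptions.

```python
# Sketch only: overlay instrument detections on the AR view so that detected
# tools remain clearly marked. Detection format is assumed.
import cv2
import numpy as np

def annotate_ar_view(ar_frame: np.ndarray, detections, min_score: float = 0.5) -> np.ndarray:
    out = ar_frame.copy()
    for x1, y1, x2, y2, score in detections:
        if score < min_score:
            continue
        cv2.rectangle(out, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
        cv2.putText(out, f"instrument {score:.2f}", (int(x1), max(int(y1) - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return out
```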
5.
Surg Endosc ; 36(11): 8533-8548, 2022 11.
Article in English | MEDLINE | ID: mdl-35941310

ABSTRACT

BACKGROUND: Artificial intelligence (AI) holds tremendous potential to reduce surgical risks and improve surgical assessment. Machine learning, a subfield of AI, can be used to analyze surgical video and imaging data. Manual annotations provide the ground truth for the desired target features, yet methodological explorations of the annotation process remain limited to date. Here, we provide an exploratory analysis of the requirements and methods of instrument annotation in a multi-institutional team from two specialized AI centers and compile our lessons learned. METHODS: We developed a bottom-up approach for team annotation of robotic instruments in robot-assisted partial nephrectomy (RAPN), which was subsequently validated in robot-assisted minimally invasive esophagectomy (RAMIE). Furthermore, instrument annotation methods were evaluated for their use in machine learning algorithms. Overall, we evaluated the efficiency and transferability of the proposed team approach and quantified performance metrics (e.g., time required per frame for each annotation modality) between RAPN and RAMIE. RESULTS: We found an image sampling frequency of 0.05 Hz to be adequate for instrument annotation. The bottom-up approach to annotation training and management resulted in accurate annotations and proved efficient for annotating large datasets. The proposed annotation methodology was transferable between RAPN and RAMIE. The average annotation time for RAPN pixel annotation ranged from 4.49 to 12.6 min per image; vector annotation averaged 2.92 min per image. Similar annotation times were found for RAMIE. Lastly, we elaborate on common pitfalls encountered throughout the annotation process. CONCLUSIONS: We propose a successful bottom-up approach to annotator team composition, applicable to any surgical annotation project. Our results lay the foundation for AI projects in instrument detection, segmentation, and pose estimation. Given the immense annotation burden resulting from spatial instrument annotation, further analysis of sampling frequency and annotation detail is needed.


Subject(s)
Laparoscopy, Robotic Surgical Procedures, Robotics, Humans, Robotic Surgical Procedures/methods, Artificial Intelligence, Nephrectomy/methods
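A brief sketch of frame sampling at the 0.05 Hz frequency reported in entry 5 (one frame every 20 s of video); the use of OpenCV and the video-path handling are illustrative assumptions, not the authors' tooling.

```python
# Sketch: extract annotation candidate frames at a fixed sampling frequency.
import cv2

def sample_frames(video_path: str, sampling_hz: float = 0.05):
    """Yield (timestamp_s, frame) pairs sampled at `sampling_hz` from a video."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    step = int(round(fps / sampling_hz))  # e.g. 25 fps / 0.05 Hz = every 500th frame
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            yield index / fps, frame
        index += 1
    cap.release()
```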